A promising way to improve the sample efficiency of reinforcement learning is model-based RL, in which abundant exploration and evaluation can be performed in a learned model to save real-world samples. However, when the learned model has non-negligible model error, it is hard to evaluate sequential steps in the model accurately, which limits how much the model can be exploited. This paper proposes to alleviate this issue by introducing multi-step plans in place of multi-step actions in model-based RL. We employ multi-step plan value estimation, which evaluates the expected discounted return after executing a sequence of action plans from a given state, and update the policy by computing the multi-step policy gradient directly through the plan value estimation. The new model-based RL algorithm MPPVE (Model-based Planning Policy Learning with Multi-step Plan Value Estimation) exploits the learned model better and achieves higher sample efficiency than state-of-the-art model-based RL methods.
translated by Google Translate
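The k-step plan value described above, the discounted rewards of a planned action sequence plus a bootstrapped value at the horizon, can be sketched as follows. This is a toy illustration, not the paper's implementation: `model`, `reward_fn`, and `value_fn` are hypothetical stand-ins for the learned dynamics model, reward model, and value function.

```python
def plan_value(state, plan, model, reward_fn, value_fn, gamma=0.99):
    """Estimate the k-step plan value: discounted rewards of executing
    the action sequence `plan` in a learned model, plus a bootstrapped
    terminal value at the plan horizon. (Toy sketch; the callables are
    assumed stand-ins for learned networks.)"""
    total, discount = 0.0, 1.0
    for action in plan:
        total += discount * reward_fn(state, action)
        state = model(state, action)           # one-step learned dynamics
        discount *= gamma
    return total + discount * value_fn(state)  # bootstrap at the horizon
```

In the actual method, the policy gradient would be taken through this estimate with respect to the whole planned action sequence, rather than one model step at a time.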
Previous databases have been designed to further the development of fake audio detection. However, fake utterances are mostly generated by altering the timbre, prosody, linguistic content, or channel noise of original audio. These databases ignore a fake situation in which the attacker replaces the acoustic scene of the original audio with a forged one. Such manipulated audio could pose a major threat to society if misused for malicious purposes. This motivates us to fill in the gap, and this paper designs such a dataset for scene fake audio detection (SceneFake). A manipulated audio clip in the SceneFake dataset involves only tampering with the acoustic scene of an utterance by using speech enhancement technologies. With the dataset, we can not only detect fake utterances on a seen test set but also evaluate how well fake detection models generalize to unseen manipulation attacks. Benchmark results on the SceneFake dataset are reported, along with an analysis of fake attacks under different speech enhancement technologies and signal-to-noise ratios. The results show that scene-manipulated utterances cannot be reliably detected by the existing baseline models of ASVspoof 2019, and that detecting unseen scene-manipulation audio remains challenging.
Many effective attempts have been made at deepfake audio detection. However, they can only distinguish real audio from fake. For many practical application scenarios, it is also necessary to know which tool or algorithm generated the deepfake audio. This raises a question: can we detect the system fingerprints of deepfake audio? This paper therefore presents a preliminary study on detecting the system fingerprints of deepfake audio. Experiments are conducted on deepfake audio datasets produced by five of the latest deep-learning speech synthesis systems. The results show that LFCC features are relatively suitable for system fingerprint detection, and that ResNet achieves the best detection results among the LCNN- and x-vector-based models. t-SNE visualization shows that different speech synthesis systems generate distinct system fingerprints.
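LFCC features, which the study above finds relatively suitable for system fingerprinting, replace the mel-scaled filterbank of MFCCs with linearly spaced triangular filters. A minimal sketch of the pipeline (frame sizes and filter counts here are illustrative defaults, not the paper's settings):

```python
import numpy as np

def lfcc(signal, n_fft=512, hop=160, n_filters=20, n_ceps=13):
    """Minimal LFCC sketch: power spectrogram -> linear-frequency
    triangular filterbank -> log -> DCT-II. Parameters are illustrative."""
    # frame the signal and apply a Hann window
    frames = [signal[s:s + n_fft] * np.hanning(n_fft)
              for s in range(0, len(signal) - n_fft + 1, hop)]
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2        # (T, n_fft//2+1)
    # triangular filters spaced linearly in frequency (unlike mel filters)
    edges = np.linspace(0, n_fft // 2, n_filters + 2).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        l, c, r = edges[i], edges[i + 1], edges[i + 2]
        fb[i, l:c + 1] = np.linspace(0, 1, c - l + 1)
        fb[i, c:r + 1] = np.linspace(1, 0, r - c + 1)
    log_energy = np.log(power @ fb.T + 1e-10)              # (T, n_filters)
    # DCT-II to decorrelate; keep the first n_ceps coefficients
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2 * n_filters)))
    return log_energy @ dct.T                              # (T, n_ceps)
```

The resulting per-frame coefficient vectors would then be fed to a classifier such as the ResNet mentioned above.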
Many effective attempts have been made at fake audio detection. However, they only provide detection results, without countermeasures against the harm. For many related practical applications, it is also necessary to know which model or algorithm generated the fake audio. We therefore pose a new problem: detecting the vocoder fingerprints of fake audio. Experiments are conducted on datasets synthesized by eight state-of-the-art vocoders. We have preliminarily explored suitable features and model architectures. t-SNE visualization shows that different vocoders generate distinct vocoder fingerprints.
Existing fake audio detection systems often rely on expert experience to design acoustic features or to manually set the hyperparameters of the network structure. However, manual parameter tuning can have a relatively pronounced impact on the results, and it is nearly impossible to set the optimal parameters by hand. This paper therefore proposes a fully automated end-to-end fake audio detection method. We first use a wav2vec pre-trained model to obtain high-level representations of speech. Then, for the network structure, we use a modified version of differentiable architecture search (DARTS) named light-DARTS, which learns deep speech representations while automatically learning and optimizing complex neural structures composed of convolutional operations and residual blocks. Experimental results on the ASVspoof 2019 LA dataset show that our proposed system achieves an equal error rate (EER) of 1.08%, outperforming state-of-the-art single systems.
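The core trick of DARTS, which light-DARTS modifies, is to make the discrete choice among candidate operations differentiable by blending their outputs with softmax-normalized architecture weights. A minimal sketch (in the real method, the architecture weights `alphas` are trained by gradient descent jointly with the network weights):

```python
import numpy as np

def mixed_op(x, alphas, ops):
    """DARTS-style mixed operation: blend candidate operations with
    softmax-normalized architecture weights, so the discrete op choice
    becomes differentiable. (Illustrative sketch only.)"""
    w = np.exp(alphas - alphas.max())
    w = w / w.sum()                       # softmax over candidate ops
    return sum(wi * op(x) for wi, op in zip(w, ops))
```

After search, the operation with the largest weight on each edge is kept, yielding the discrete architecture.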
Prostate cancer is the second leading cause of cancer death among men in the United States. Diagnosis on prostate MRI often relies on accurate segmentation of the prostatic zones. However, state-of-the-art automatic segmentation methods often fail to produce well-contained volumetric segmentations of the prostatic zones, because some slices of a prostate MRI volume, such as the base and apex slices, are harder to segment than others. This difficulty can be overcome by considering cross-slice relations between adjacent slices, but current methods do not fully learn and exploit such relations. In this paper, we propose a novel cross-slice attention mechanism, which we use in a transformer module to systematically learn cross-slice relations at different scales. The module can be plugged into any existing learning-based segmentation framework with skip connections. Experiments show that our cross-slice attention is able to capture cross-slice information for prostatic zonal segmentation and improves the performance of current state-of-the-art methods. Our method raises segmentation accuracy in the peripheral zone, so that the segmentation results are consistent across all prostate slices (apex, mid-gland, and base).
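A cross-slice attention module of this kind can be pictured as standard scaled dot-product attention applied along the slice axis, letting each slice aggregate features from the whole stack. A toy single-head sketch, where the shapes and projection matrices are illustrative rather than the paper's exact module:

```python
import numpy as np

def cross_slice_attention(slices, Wq, Wk, Wv):
    """Toy cross-slice attention: each slice's feature vector attends
    to the features of all slices in the stack, so hard-to-segment
    base/apex slices can borrow context from their neighbours.
    `slices` has shape (S, d); Wq/Wk/Wv are (d, d) projections."""
    Q, K, V = slices @ Wq, slices @ Wk, slices @ Wv    # (S, d) each
    scores = Q @ K.T / np.sqrt(K.shape[1])             # (S, S) slice-to-slice
    scores -= scores.max(axis=1, keepdims=True)        # numerical stability
    A = np.exp(scores)
    A /= A.sum(axis=1, keepdims=True)                  # softmax attention weights
    return A @ V                                       # slice features with context
```

In the paper's setting this would operate on multi-scale feature maps inside a transformer module plugged into a segmentation network's skip connections.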
To date, live-cell imaging at the nanometer scale remains challenging. Although super-resolution microscopy enables the visualization of subcellular structures below the optical diffraction limit, the spatial resolution is still far from sufficient for reconstructing the structures of biomolecules in vivo (e.g., microtubule fibers about 24 nm in thickness). In this study, we propose an A-net network and show that the resolution of cytoskeleton images captured by confocal microscopy can be significantly improved by combining the A-net deep learning network with a DWDC algorithm based on a degradation model. Using the DWDC algorithm to construct new datasets and exploiting the characteristics of the A-net neural network (i.e., its relatively few layers), we successfully removed the noise and flocculent structures that originally interfered with the cellular structures in the raw images, and improved the spatial resolution by 10 times using a relatively small dataset. We therefore conclude that the proposed algorithm, which combines the A-net neural network with the DWDC method, is a suitable and universal approach for extracting the structural details of biomolecules, cells, and organs from low-resolution images.
The design or simulation analysis of special equipment products must comply with national standards, so it is often necessary to repeatedly consult the contents of these standards during the design process. However, traditional question-answering systems based on keyword retrieval struggle to give accurate answers to technical questions. We therefore use natural language processing techniques to design a question-answering system for the decision-making process in pressure vessel design. To address the shortage of training data for the technical question-answering system, we propose a method that generates questions from a declarative sentence along several different dimensions, so that multiple question-answer pairs can be obtained from one declarative sentence. In addition, we design an interactive attention model based on a bidirectional long short-term memory (BiLSTM) network to improve the performance of similarity comparison between two question sentences. Finally, the performance of the question-answering system is evaluated on both public and technical-domain datasets.
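The interactive attention idea, in which each question sentence attends over the other before similarity is computed, can be sketched as soft alignment between two sequences of encoder states. The BiLSTM encoder itself is omitted here for brevity; `h1` and `h2` are assumed to be its hidden-state outputs:

```python
import numpy as np

def interactive_attention(h1, h2):
    """Sketch of interactive attention between two encoded sentences:
    a pairwise similarity matrix aligns each token of one sentence
    with a soft summary of the other. `h1` is (len1, d), `h2` is
    (len2, d), assumed to be BiLSTM hidden states."""
    sim = h1 @ h2.T                                   # (len1, len2) token similarities
    a12 = np.exp(sim - sim.max(axis=1, keepdims=True))
    a12 /= a12.sum(axis=1, keepdims=True)             # attention of sentence 1 over 2
    a21 = np.exp(sim.T - sim.T.max(axis=1, keepdims=True))
    a21 /= a21.sum(axis=1, keepdims=True)             # attention of sentence 2 over 1
    return a12 @ h2, a21 @ h1                         # soft-aligned representations
```

The aligned representations would then feed a pooling and scoring layer that outputs the final sentence-similarity score.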
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task freeing people from heavy annotation work. However, domain discrepancies in low-level image statistics and high-level contexts compromise the segmentation performance over the target domain. A key idea to tackle this problem is to perform both image-level and feature-level adaptation jointly. Unfortunately, there is a lack of such unified approaches for UDA tasks in the existing literature. This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation. Concretely, for image-level domain shifts, we propose a global photometric alignment module and a global texture alignment module that align images in the source and target domains in terms of image-level properties. For feature-level domain shifts, we perform global manifold alignment by projecting pixel features from both domains onto the feature manifold of the source domain; and we further regularize category centers in the source domain through a category-oriented triplet loss and perform target domain consistency regularization over augmented target domain images. Experimental results demonstrate that our pipeline significantly outperforms previous methods. In the commonly tested GTA5$\rightarrow$Cityscapes task, our proposed method using Deeplab V3+ as the backbone surpasses previous SOTA by 8%, achieving 58.2% in mIoU.
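Global photometric alignment at the image level can be approximated, for illustration, by matching channel-wise statistics of a source-domain image to the target domain. This sketch is a simple stand-in for the paper's alignment module, not its actual implementation:

```python
import numpy as np

def photometric_align(src, tgt):
    """Minimal global photometric alignment sketch: shift and scale each
    channel of a source-domain image to match the target image's
    channel-wise mean and standard deviation. (Illustrative stand-in
    for a learned image-level alignment module.)"""
    out = np.empty_like(src, dtype=float)
    for c in range(src.shape[-1]):
        s = src[..., c].astype(float)
        t = tgt[..., c].astype(float)
        out[..., c] = (s - s.mean()) / (s.std() + 1e-8) * t.std() + t.mean()
    return out
```

Statistics-matching of this kind reduces low-level appearance gaps (brightness, color cast) between domains before feature-level alignment is applied.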
Compressed videos often exhibit visually annoying artifacts, known as Perceivable Encoding Artifacts (PEAs), which dramatically degrade video visual quality. Subjective and objective measures capable of identifying and quantifying various types of PEAs are critical for improving visual quality. In this paper, we investigate the influence of four spatial PEAs (i.e., blurring, blocking, bleeding, and ringing) and two temporal PEAs (i.e., flickering and floating) on video quality. For spatial artifacts, we propose a visual saliency model with low computational cost and higher consistency with human visual perception. For temporal artifacts, we adapt the self-attention-based TimeSformer to detect them. Based on the six types of PEAs, a quality metric called Saliency-Aware Spatio-Temporal Artifacts Measurement (SSTAM) is proposed. Experimental results demonstrate that the proposed metric outperforms state-of-the-art metrics. We believe SSTAM will be beneficial for optimizing video coding techniques.